Water Based Inkjet Material Deposition Of Donor-Acceptor Nanoparticles For Usage In Organic Photovoltaics
Significant efficiency increases are being made for bulk heterojunction organic photovoltaic (OPV) prototype devices, with world records at 11%. However, the chlorinated solvents most frequently used in prototype manufacture would raise local health and safety concerns, or cause large-scale environmental pollution, if these techniques were scaled up for commercialization. Moreover, research to bridge prototype and large-scale production of these solar cells is still in its infancy. Most prototype devices are made by spin-coating in inert glove-box environments. There is a need to develop a non-toxic ink and incorporate it into a material-deposition system that can be used in mass production.
In this thesis, P3HT:PCBM organic photovoltaic devices were fabricated using inkjet printing. P3HT:PCBM blends were dissolved in organic solvent systems, and this solution was used as the ink for the printer. The coffee-ring effect, as well as the effect of inkjet-printing parameters on film formation, was examined; the inkjet-printing method was thus validated as a stepping stone between lab-scale production of OPVs and large-scale roll-to-roll manufacturing.
To address the need for a non-toxic ink, P3HT:PCBM blends were then dispersed in water using the miniemulsion method. The nanoparticles were characterized for their size, as well as for the blending between the P3HT and PCBM within each nanoparticle. These dispersions were then converted into inks. Finally, these nanoparticle inks were inkjet-printed to fabricate OPV devices.
Based on the results obtained here, tentative next steps are outlined for improving upon this work in the future.
Linking electronic structure calculations to generalized stacking fault energies in multicomponent alloys
The generalized stacking fault energy is a key ingredient in mesoscale models
of dislocations. Here we develop an approach to quantify the dependence of
generalized stacking fault energies on the degree of chemical disorder in
multicomponent alloys. We introduce the notion of a "configurationally-resolved
planar fault" (CRPF) energy and extend the cluster expansion method from alloy
theory to express the CRPF as a function of chemical occupation variables of
sites surrounding the fault. We apply the approach to explore the composition
and temperature dependence of the unstable stacking fault energy (USF) in
binary Mo-Nb alloys. First-principles calculations are used to parameterize a
formation energy and CRPF cluster expansion. Monte Carlo simulations show that
the distribution of USF energies is significantly affected by chemical
composition and temperature. The formalism can be applied to any multicomponent
alloy and will enable the development of rigorous models for deformation
mechanisms in high-entropy alloys.
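The core idea above — expressing an energy as a function of chemical occupation variables and sampling configurations with Monte Carlo — can be illustrated with a deliberately minimal sketch. The pair interaction, chain geometry, and all numerical values below are illustrative assumptions, not the actual Mo-Nb cluster expansion parameterized in the paper:

```python
import math
import random

# Toy cluster expansion on a 1D chain: occupation variables sigma_i = +1
# (e.g. Mo) or -1 (e.g. Nb), with a single assumed nearest-neighbour
# effective cluster interaction. Real CRPF expansions involve many clusters
# of sites surrounding the fault plane.
J_PAIR = 0.05   # assumed effective cluster interaction (arbitrary units)
N_SITES = 64

def ce_energy(sigma):
    """Cluster-expansion energy: sum of nearest-neighbour pair terms
    on a periodic chain."""
    n = len(sigma)
    return sum(J_PAIR * sigma[i] * sigma[(i + 1) % n] for i in range(n))

def metropolis_step(sigma, beta):
    """Flip one randomly chosen occupation variable and accept the trial
    configuration with the Metropolis criterion."""
    i = random.randrange(len(sigma))
    trial = sigma[:]
    trial[i] = -trial[i]
    d_e = ce_energy(trial) - ce_energy(sigma)
    if d_e <= 0 or random.random() < math.exp(-beta * d_e):
        return trial
    return sigma

random.seed(0)
sigma = [random.choice([+1, -1]) for _ in range(N_SITES)]
for _ in range(2000):
    sigma = metropolis_step(sigma, beta=10.0)
print(ce_energy(sigma))
```

In the same spirit, the paper's Monte Carlo simulations sample equilibrium chemical configurations at a given temperature and then evaluate the CRPF expansion on sites around the fault plane, producing a distribution of USF energies rather than a single value.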
Contextual Language Model Adaptation for Conversational Agents
Statistical language models (LM) play a key role in Automatic Speech
Recognition (ASR) systems used by conversational agents. These ASR systems
should provide a high accuracy under a variety of speaking styles, domains,
vocabulary and argots. In this paper, we present a DNN-based method to adapt
the LM to each user-agent interaction based on generalized contextual
information, by predicting an optimal, context-dependent set of LM
interpolation weights. We show that this framework for contextual adaptation
provides accuracy improvements under different possible mixture LM partitions
that are relevant for both (1) goal-oriented conversational agents, where it
is natural to partition the data by the requested application, and (2)
non-goal-oriented conversational agents, where the data can be partitioned
using topic labels predicted by a topic classifier. We obtain a relative
WER improvement of 3% with a 1-pass decoding strategy and 6% in a 2-pass
decoding framework, over an unadapted model. We also show up to a 15% relative
improvement in recognizing named entities which is of significant value for
conversational ASR systems.
Comment: Interspeech 2018 (accepted)
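The adaptation mechanism described above — predicting context-dependent interpolation weights over a set of component LMs — can be sketched in a few lines. The "DNN" here is replaced by a single linear layer for brevity, and all names, dimensions, and probabilities are illustrative assumptions rather than the paper's architecture:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def predict_weights(context, weight_matrix):
    """Map a context feature vector to interpolation weights, one per
    component LM (stand-in for the DNN weight predictor)."""
    logits = [sum(w * c for w, c in zip(row, context)) for row in weight_matrix]
    return softmax(logits)

def interpolated_prob(word, history, lms, weights):
    """Adapted LM probability: sum_i w_i * P_i(word | history)."""
    return sum(w * lm(word, history) for w, lm in zip(weights, lms))

# Two toy component LMs, e.g. per-application partitions of the data.
lm_music = lambda w, h: {"play": 0.3}.get(w, 0.01)
lm_weather = lambda w, h: {"rain": 0.2}.get(w, 0.01)

weight_matrix = [[1.0, 0.0], [0.0, 1.0]]  # assumed 2x2 predictor weights
ctx = [2.0, 0.0]                          # context feature favouring "music"
w = predict_weights(ctx, weight_matrix)
p = interpolated_prob("play", None, [lm_music, lm_weather], w)
print(w, p)
```

Because the weights are recomputed per interaction from the context features, the mixture shifts toward the component LM that matches the current application or topic, which is the effect the WER improvements above measure.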
Speech To Semantics: Improve ASR and NLU Jointly via All-Neural Interfaces
We consider the problem of spoken language understanding (SLU) of extracting
natural language intents and associated slot arguments or named entities from
speech that is primarily directed at voice assistants. Such a system subsumes
both automatic speech recognition (ASR) as well as natural language
understanding (NLU). An end-to-end joint SLU model can be built to a required
specification, opening up the opportunity to deploy in hardware-constrained
scenarios, such as on-device processing that lets voice assistants work
offline in a privacy-preserving manner while also reducing server costs.
We first present models that extract utterance intent directly from speech
without intermediate text output. We then present a compositional model, which
generates the transcript using the Listen Attend Spell ASR system and then
extracts interpretation using a neural NLU model. Finally, we contrast these
methods to a jointly trained end-to-end joint SLU model, consisting of ASR and
NLU subsystems which are connected by a neural network based interface instead
of text, that produces transcripts as well as NLU interpretation. We show that
the jointly trained model improves ASR by incorporating semantic information
from the NLU, and improves NLU by exposing it to the ASR confusion encoded in
the hidden layer.
Comment: Proceedings of INTERSPEECH
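The distinguishing idea above is the neural interface: the NLU component consumes the ASR hidden vectors, which retain recognition uncertainty, instead of a 1-best transcript. A minimal sketch follows; the dimensions, the stand-in "encoder", and the single linear intent head are all illustrative assumptions, not the paper's Listen-Attend-Spell-based architecture:

```python
import math
import random

random.seed(0)
HIDDEN_DIM, N_INTENTS = 8, 3

def asr_hidden_states(audio_frames):
    """Stand-in for an ASR decoder: one hidden vector per step. A real
    system would produce these from a trained recurrent encoder-decoder."""
    return [[math.tanh(f * (i + 1) / HIDDEN_DIM) for i in range(HIDDEN_DIM)]
            for f in audio_frames]

def mean_pool(states):
    """Pool the per-step hidden vectors into one utterance representation."""
    return [sum(col) / len(states) for col in zip(*states)]

def intent_logits(pooled, head_weights):
    """NLU head: linear layer over the pooled hidden representation,
    consuming hidden vectors rather than decoded text."""
    return [sum(w * x for w, x in zip(row, pooled)) for row in head_weights]

head_weights = [[random.uniform(-1, 1) for _ in range(HIDDEN_DIM)]
                for _ in range(N_INTENTS)]
states = asr_hidden_states([0.1, 0.5, -0.3, 0.9])
logits = intent_logits(mean_pool(states), head_weights)
intent = max(range(N_INTENTS), key=lambda k: logits[k])
print(intent, logits)
```

Because gradients from the intent head flow back through the shared hidden representation during joint training, the ASR side is exposed to semantic supervision and the NLU side to acoustic confusion, which is the two-way improvement the abstract reports.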
Max-Pooling Loss Training of Long Short-Term Memory Networks for Small-Footprint Keyword Spotting
We propose a max-pooling based loss function for training Long Short-Term
Memory (LSTM) networks for small-footprint keyword spotting (KWS), with low
CPU, memory, and latency requirements. The max-pooling loss training can be
further guided by initializing with a cross-entropy loss trained network. A
posterior smoothing based evaluation approach is employed to measure keyword
spotting performance. Our experimental results show that LSTM models trained
using cross-entropy loss or max-pooling loss outperform a cross-entropy loss
trained baseline feed-forward Deep Neural Network (DNN). In addition,
a max-pooling loss trained LSTM that is randomly initialized performs better
than a cross-entropy loss trained LSTM. Finally, the max-pooling loss trained
LSTM initialized with a cross-entropy pre-trained network shows the best
performance, yielding a relative reduction in the Area Under the Curve (AUC)
measure compared to the baseline feed-forward DNN.
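The loss itself is simple to state: for a positive (keyword) clip, the cross-entropy is applied only at the frame where the keyword posterior peaks, so the network need only fire once per keyword. The sketch below assumes the per-frame posteriors are already produced by an LSTM, and the averaged negative-frame term is a simplification of my own, not necessarily the paper's exact formulation:

```python
import math

def max_pooling_loss(keyword_posteriors, is_keyword):
    """Max-pooling loss over a clip.

    keyword_posteriors: per-frame P(keyword) values in (0, 1), e.g. from
    an LSTM's output layer.
    """
    if is_keyword:
        # Positive clip: penalise only the best-scoring frame (max-pool),
        # so one confident peak anywhere in the clip suffices.
        return -math.log(max(keyword_posteriors))
    # Negative clip: every frame should score low; average the per-frame
    # cross-entropy against the non-keyword target.
    return -sum(math.log(1.0 - p) for p in keyword_posteriors) / len(keyword_posteriors)

print(max_pooling_loss([0.1, 0.8, 0.3], True))   # small: a confident peak exists
print(max_pooling_loss([0.1, 0.2, 0.1], False))  # small: no frame fires
```

Initializing with a cross-entropy trained network, as the abstract describes, gives the max-pooling stage sensible posteriors to start from, which is why the pre-trained variant performs best.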